generating explanation
- North America > United States > Virginia > Fairfax County > Fairfax (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks
Graph regression is a fundamental task that has gained significant attention in various graph learning tasks. However, the inference process is often not easily interpretable. Current explanation techniques are limited to understanding Graph Neural Network (GNN) behaviors in classification tasks, leaving an explanation gap for graph regression models. In this work, we propose a novel explanation method for interpreting graph regression models (XAIG-R). Our method addresses the distribution-shifting problem and the continuously ordered decision boundary issue, which prevent existing methods from being applied to regression tasks.
Reviews: GNNExplainer: Generating Explanations for Graph Neural Networks
The reviewers agreed that this paper presents a valuable contribution for explaining GNNs; they appreciated the quality of the writing, the overall motivation of model interpretability, and the strength of the empirical results. The primary remaining shortcomings mentioned in the reviews should be addressed as described in the response: expanding explanations that describe multiple edges, the significance of the hyper-parameters, the description of the synthetic datasets, and the additional experiments.
Reviews: GNNExplainer: Generating Explanations for Graph Neural Networks
The paper focuses on graph neural networks (GNNs), which have recently gained significant attention as vital components of machine learning systems over graph-structured data. Specifically, the authors build a method to analyze the predictions made by a GNN and output explanations in the form of a contextual subgraph and a subset of the features of the nodes in this subgraph. The goal is to understand which parts of the graph (structure and features) the GNN model gave importance to while computing predictions for nodes and edges. The authors achieve this by optimizing an information-theoretic objective that considers the mutual information between the predictions and the relevant graph components. The authors provide methods for both single- and multi-instance settings.
Review for NeurIPS paper: Towards Interpretable Natural Language Understanding with Explanations as Latent Variables
Weaknesses: My main concern is about how explanations are being employed as latent variables. I had assumed based on the introduction that the final predictor would factor through the final explanation. This would provide the faithfulness guarantee that two inputs which produce the same explanation would produce the same output label. However, it seems that during training, the explanation is conditioned on the gold label. The paper points out on L161 that "generating explanations without a predicted label often results in irrelevant and even misleading explanations."
GNNExplainer: Generating Explanations for Graph Neural Networks
Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models, and explaining predictions made by GNNs remains unsolved. Here we propose GNNExplainer, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GNNExplainer identifies a compact subgraph structure and a small subset of node features that play a crucial role in the GNN's prediction. Further, GNNExplainer can generate consistent and concise explanations for an entire class of instances.
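The review above describes GNNExplainer's information-theoretic objective; the sketch below shows the core single-instance idea in PyTorch: learn a soft edge mask whose masked prediction stays close to the model's original prediction (a practical proxy for maximizing mutual information), regularized toward a compact subgraph. It assumes a graph-level classifier that accepts a per-edge `edge_weight` argument (as PyTorch Geometric convolutions do); names like `explain_instance` are illustrative, not the authors' API.

```python
import torch

def explain_instance(model, x, edge_index, target, steps=300,
                     size_weight=0.005, ent_weight=0.1):
    """Learn a soft edge mask that keeps the model's prediction for
    `target` likely (a proxy for maximizing mutual information between
    the masked subgraph and the prediction), while staying sparse."""
    mask_logits = torch.randn(edge_index.size(1), requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=0.01)

    for _ in range(steps):
        optimizer.zero_grad()
        mask = torch.sigmoid(mask_logits)
        # Assumes the model accepts per-edge weights, as PyG convs do.
        log_probs = model(x, edge_index, edge_weight=mask).log_softmax(dim=-1)
        pred_loss = -log_probs[0, target]        # preserve original prediction
        size_loss = size_weight * mask.sum()     # prefer a compact subgraph
        ent = -(mask * mask.clamp_min(1e-8).log()
                + (1 - mask) * (1 - mask).clamp_min(1e-8).log())
        loss = pred_loss + size_loss + ent_weight * ent.mean()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(mask_logits).detach()   # per-edge importance scores
```

In the paper a node-feature mask is learned jointly in the same fashion; only the edge mask is shown here for brevity.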
A generative framework to bridge data-driven models and scientific theories in language neuroscience
Antonello, Richard, Singh, Chandan, Jain, Shailee, Hsu, Aliyah, Gao, Jianfeng, Yu, Bin, Huth, Alexander
However, these models are not scientific theories that describe the world in natural language. Instead, they are implemented in the form of vast neural networks with millions or billions of largely inscrutable parameters. One emblematic field is language neuroscience, where large language models (LLMs) are highly effective at predicting human brain responses to natural language, but are virtually impossible to interpret or analyze by hand [4-10]. To overcome this challenge, we introduce the generative explanation-mediated validation (GEM-V) framework. GEM-V translates deep learning models of language selectivity in the brain into concise verbal explanations, and then designs follow-up experiments to verify that these explanations are causally related to brain activity.
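The abstract does not spell out GEM-V's implementation; the sketch below is only a plausible skeleton of the explain-then-verify loop under common language-neuroscience assumptions: a ridge encoding model maps LLM stimulus embeddings to a voxel's response, the stimuli the model scores highest are summarized into a candidate verbal explanation, and a follow-up experiment tests that explanation causally. `summarize_with_llm` and all data arrays are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_model(stim_embeddings, voxel_responses):
    """Encoding model: predict one voxel's response from LLM embeddings
    of the presented language stimuli (both arrays are assumed given)."""
    enc = Ridge(alpha=1.0)
    enc.fit(stim_embeddings, voxel_responses)
    return enc

def candidate_explanation(enc, ngrams, ngram_embeddings, k=20):
    """Collect the n-grams the encoding model scores highest; an LLM call
    (e.g., a hypothetical summarize_with_llm) would compress them into a
    short verbal explanation of the voxel's selectivity."""
    scores = enc.predict(ngram_embeddings)
    return [ngrams[i] for i in np.argsort(scores)[::-1][:k]]

# The verification step would then present new stimuli that match vs.
# violate the explanation and test for a causal difference in response.
```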
- North America > United States > California > San Francisco County > San Francisco (0.28)
- North America > United States > Texas > Travis County > Austin (0.14)
- North America > United States > California > Alameda County > Berkeley (0.14)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.69)
Generating Explanations for Cellular Neural Networks
Sinha, Akshit, Vennam, Sreeram, Sharma, Charu, Kumaraguru, Ponnurangam
Recent advancements in graph learning have contributed to explaining predictions generated by Graph Neural Networks. However, existing methodologies often fall short when applied to real-world datasets. We introduce HOGE, a framework that captures higher-order structures using cell complexes, which excel at modeling higher-order relationships. Higher-order structures are ubiquitous in the real world, for example in molecules or social networks, so our work significantly enhances the practical applicability of graph explanations. HOGE produces clearer and more accurate explanations than prior methods, and it can be integrated with all existing graph explainers, ensuring seamless integration into current frameworks. Evaluated on GraphXAI benchmark datasets, HOGE achieves improved or comparable performance with minimal computational overhead. Ablation studies show that the observed performance gain can be attributed to the higher-order structures introduced by the cell complexes.
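The abstract doesn't specify how HOGE constructs its cell complexes; as a toy illustration of the underlying idea (promoting rings and other higher-order structures to first-class units an explainer can score), the sketch below lifts each independent cycle of a graph to a hypothetical "cell node" using networkx. The real framework operates on cell complexes proper; this is only a graph-level approximation.

```python
import networkx as nx

def lift_with_cycle_cells(g: nx.Graph) -> nx.Graph:
    """Toy lifting: promote each independent cycle to a 'cell node'
    connected to the cycle's vertices, so an off-the-shelf graph
    explainer can assign importance to a ring as a single unit."""
    lifted = g.copy()
    for i, cycle in enumerate(nx.cycle_basis(g)):
        cell = f"cell_{i}"
        lifted.add_node(cell, is_cell=True)
        for v in cycle:
            lifted.add_edge(cell, v)
    return lifted

# Example: a benzene-like 6-ring gains one cell node tied to all 6 atoms.
ring = nx.cycle_graph(6)
lifted = lift_with_cycle_cells(ring)
print(lifted.number_of_nodes(), lifted.number_of_edges())  # 7, 12
```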
- North America > United States > Illinois (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Asia > India > Telangana > Hyderabad (0.04)
Uncertainty-Aware Explainable Recommendation with Large Language Models
Peng, Yicui, Chen, Hao, Lin, Chingsheng, Huang, Guo, Hu, Jinrong, Guo, Hui, Kong, Bin, Hu, Shu, Wu, Xi, Wang, Xin
Providing explanations within a recommendation system can boost user satisfaction and foster trust, especially by elaborating on the reasons for selecting recommended items tailored to the user. The predominant approach in this domain revolves around generating text-based explanations, with a notable emphasis on applying large language models (LLMs). However, fine-tuning LLMs for explainable recommendation is often impractical due to time constraints and computing resource limitations; as an alternative, the current approach trains the prompt rather than the LLM. In this study, we develop a model that uses the ID vectors of the user and item inputs as prompts for GPT-2. We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and the explanation task. This strategy enables a more effective exploration of users' interests, improving recommendation effectiveness and user satisfaction. In experiments, our method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor, and Amazon datasets, respectively, demonstrating superior performance over four SOTA methods on explainability evaluation metrics. In addition, the proposed model maintains stable textual quality across the three public datasets.
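The abstract only says that user/item ID vectors serve as prompts for GPT-2 with joint multi-task training; the sketch below shows one common way to realize that (soft ID prompts prepended to GPT-2's input embeddings, plus a small rating head for the recommendation task). The class and all names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class PromptRecExplainer(nn.Module):
    """Sketch: frozen GPT-2 conditioned on learned user/item ID embeddings
    used as two soft prompt tokens; a small head on the same embeddings
    predicts the rating so both tasks can be trained jointly."""
    def __init__(self, n_users, n_items, model_name="gpt2"):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        for p in self.lm.parameters():
            p.requires_grad = False              # train prompts and heads only
        d = self.lm.config.n_embd
        self.user_emb = nn.Embedding(n_users, d)
        self.item_emb = nn.Embedding(n_items, d)
        self.rating_head = nn.Linear(2 * d, 1)

    def forward(self, user, item, input_ids):
        u, v = self.user_emb(user), self.item_emb(item)   # (B, d) each
        prompt = torch.stack([u, v], dim=1)               # (B, 2, d)
        tokens = self.lm.transformer.wte(input_ids)       # (B, T, d)
        embeds = torch.cat([prompt, tokens], dim=1)
        # -100 masks the two prompt positions out of the LM loss
        labels = torch.cat(
            [input_ids.new_full((input_ids.size(0), 2), -100), input_ids],
            dim=1)
        lm_out = self.lm(inputs_embeds=embeds, labels=labels)
        rating = self.rating_head(torch.cat([u, v], dim=-1)).squeeze(-1)
        return lm_out.loss, rating
```

A joint objective such as `loss = lm_loss + lam * mse(rating, y)` would realize the multi-task training the abstract describes.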
- Asia > China > Sichuan Province > Chengdu (0.04)
- South America > Argentina > Patagonia > Río Negro Province > Viedma (0.04)
- North America > United States > Indiana > Marion County > Indianapolis (0.04)
- Asia > Taiwan (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.89)